Causal disentanglement seeks a representation of data involving latent variables that relate to one another via a causal model. A representation is identifiable if both the latent model and the transformation from latent to observed variables are unique. In this paper, we study observed variables that are a linear transformation of a linear latent causal model. Data from interventions are necessary for identifiability: if one latent variable is missing an intervention, we show that there exist distinct models that cannot be distinguished. Conversely, we show that a single intervention on each latent variable is sufficient for identifiability. Our proof uses a generalization of the RQ decomposition of a matrix that replaces the usual orthogonal and upper triangular conditions with analogues depending on a partial order on the rows of the matrix, with partial order determined by a latent causal model. We corroborate our theoretical results with a method for causal disentanglement that accurately recovers a latent causal model.
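The standard RQ decomposition that the paper generalizes factors a matrix A as A = RQ with R upper triangular and Q orthogonal. A minimal illustration using SciPy (the partial-order generalization described in the abstract is not implemented here):

```python
import numpy as np
from scipy.linalg import rq

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Factor A = R @ Q, with R upper triangular and Q orthogonal.
R, Q = rq(A)
```

The paper's generalization replaces "upper triangular" and "orthogonal" with analogous conditions induced by a partial order on the rows, determined by the latent causal graph.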
An important problem across many disciplines is discovering interventions that produce a desired outcome. When the space of possible interventions is too large for exhaustive search, experimental design strategies are needed. In this context, encoding the causal relationships between variables, and hence the effects of interventions on the system, is critical for efficiently identifying desirable interventions. We develop an iterative causal method to identify optimal interventions, as measured by the discrepancy between the post-interventional mean of the distribution and a desired target mean. We formulate an active learning strategy that uses the samples obtained from different interventions to update beliefs about the underlying causal model, and to identify the samples that are most informative about optimal interventions and should therefore be acquired in the next batch. The approach employs a Bayesian update of the causal model and prioritizes interventions using a carefully designed, causally informed acquisition function. This acquisition function is evaluated in closed form, allowing for efficient optimization. The resulting algorithms are theoretically grounded with information-theoretic bounds and provable consistency results. We illustrate the method on both synthetic data and real-world biological data, namely gene expression data from Perturb-CITE-seq experiments, to identify optimal perturbations that induce a specific cell-state transition; the proposed causal approach achieves better sample efficiency compared to several baselines. In both cases, we observe that the causally informed acquisition function notably outperforms existing criteria, allowing for optimal intervention design with substantially fewer experiments.
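The acquisition step described above can be illustrated schematically: given posterior samples of the post-interventional mean under each candidate intervention, score each candidate by its expected squared distance to the target mean and select the minimizer. This is a toy sketch with made-up numbers, not the paper's closed-form acquisition function:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, 0.0])

# Posterior samples of the post-interventional mean for each candidate
# intervention (drawn here from illustrative Gaussians).
candidates = {
    "do(X1)": rng.normal([0.9, 0.1], 0.2, size=(500, 2)),
    "do(X2)": rng.normal([0.0, 1.0], 0.2, size=(500, 2)),
}

def acquisition(samples, target):
    """Expected squared distance between the post-interventional mean and the target."""
    return float(np.mean(np.sum((samples - target) ** 2, axis=1)))

# Choose the intervention whose predicted outcome lies closest to the target.
best = min(candidates, key=lambda k: acquisition(candidates[k], target))
```

In the paper's setting, the posterior over the causal model is updated after each batch of interventional samples, so these candidate distributions would be refined iteratively.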
In this review, we discuss approaches for learning causal structure from data, also called causal discovery. In particular, we focus on approaches for learning directed acyclic graphs (DAGs) and various generalizations which allow for some variables to be unobserved in the available data. We devote special attention to two fundamental combinatorial aspects of causal structure learning. First, we discuss the structure of the search space over causal graphs. Second, we discuss the structure of equivalence classes over causal graphs, i.e., sets of graphs which represent what can be learned from observational data alone, and how these equivalence classes can be refined by adding interventional data.
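A central fact about these equivalence classes (due to Verma and Pearl) is that two DAGs are Markov equivalent, i.e., indistinguishable from observational data alone, if and only if they share the same skeleton and the same v-structures. A small self-contained sketch, with DAGs represented as hypothetical edge sets:

```python
def skeleton(edges):
    """Undirected version of a DAG's edge set."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Colliders a -> c <- b where a and b are non-adjacent."""
    parents = {}
    for a, b in edges:
        parents.setdefault(b, set()).add(a)
    skel = skeleton(edges)
    vs = set()
    for c, ps in parents.items():
        for a in ps:
            for b in ps:
                if a < b and frozenset((a, b)) not in skel:
                    vs.add((a, c, b))
    return vs

def markov_equivalent(e1, e2):
    return skeleton(e1) == skeleton(e2) and v_structures(e1) == v_structures(e2)

# X -> Y -> Z and X <- Y <- Z have the same skeleton and no colliders,
# so they are Markov equivalent; X -> Y <- Z has a v-structure and is not.
chain    = {("X", "Y"), ("Y", "Z")}
rchain   = {("Y", "X"), ("Z", "Y")}
collider = {("X", "Y"), ("Z", "Y")}
```

Interventional data refines these classes further: an intervention on Y, for example, distinguishes the chain from its reversal.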
This paper shows that the safety guarantees arising from the use of barrier functions can, in some cases, be unnecessarily restrictive. In particular, we examine the case of fixed-wing aircraft collision avoidance and show that, when barrier functions are used, there are cases in which two fixed-wing aircraft can come closer to colliding than if no barrier function were used at all. Moreover, we construct cases in which the barrier function labels the system as unsafe even when the vehicles start arbitrarily far apart. In other words, the barrier function ensures safety, but at an unnecessary cost in performance. We therefore introduce model-free barrier functions, which take a data-driven approach to creating barrier functions. We demonstrate the effectiveness of model-free barrier functions in a collision-avoidance simulation of two fixed-wing aircraft.
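For reference, a standard control barrier function certifies safety of a set {x : h(x) ≥ 0} by enforcing the inequality ḣ(x) + αh(x) ≥ 0 along trajectories. A minimal sketch for a two-agent minimum-distance barrier; the single-integrator dynamics, gain, and numbers below are illustrative assumptions, not the paper's model-free construction:

```python
import numpy as np

def h(p1, p2, d_min=2.0):
    """Barrier: nonnegative iff the agents are at least d_min apart."""
    return float(np.dot(p1 - p2, p1 - p2) - d_min**2)

def h_dot(p1, p2, v1, v2):
    """Time derivative of h along the relative motion."""
    return float(2.0 * np.dot(p1 - p2, v1 - v2))

def cbf_condition(p1, p2, v1, v2, alpha=1.0):
    """CBF inequality h_dot + alpha * h >= 0, which renders the safe set invariant."""
    return h_dot(p1, p2, v1, v2) + alpha * h(p1, p2) >= 0.0

# Two agents approaching head-on at distance 10: far from the boundary,
# the condition still holds and no braking is required yet.
p1, p2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
v1, v2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
```

The paper's point is that for fixed-wing aircraft, whose dynamics are far more constrained than this sketch, a hand-designed h can be overly conservative, motivating the data-driven, model-free construction.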